    Software for web-based tic suppression training [version 2; referees: 3 approved]

    Exposure and response prevention (ERP) is a first-line behavior therapy for obsessive-compulsive disorder and Tourette syndrome (TS). However, ERP for tic disorders requires intentional tic suppression, which for some patients is difficult even for brief periods. Additionally, practical access to behavior therapy is difficult for many patients, especially those in rural areas. The authors present a simple, working web platform (TicTrainer) that implements a strategy called reward-enhanced exposure and response prevention (RE–ERP). This strategy sacrifices most expert therapist components of ERP, focusing only on increasing the duration of time for which the user can suppress tics through automated differential reinforcement of tic-free periods (DRO). RE–ERP requires an external tic monitor, such as a parent, during training sessions. The user sees increasing digital rewards for longer and longer periods of successful tic suppression, similar to a video game score. TicTrainer is designed with security in mind, storing no personally identifiable health information, and has features to facilitate research, including optional masked comparison of tics during DRO vs. noncontingent reward conditions. A working instance of TicTrainer is available from https://tictrainer.com
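
    The differential-reinforcement loop described above can be pictured with a short sketch. The Python fragment below only illustrates the idea of awarding progressively larger rewards for longer tic-free streaks; the class name, parameters, and scoring formula are hypothetical and are not taken from the TicTrainer code.

```python
import time

class DroRewardTracker:
    """Toy model of differential reinforcement of tic-free periods (DRO):
    the longer the current tic-free streak, the faster points accumulate,
    similar to a video game score.  All parameters are hypothetical."""

    def __init__(self, base_points: float = 1.0, streak_bonus: float = 0.1):
        self.base_points = base_points     # points per update at streak start
        self.streak_bonus = streak_bonus   # extra multiplier per streak second
        self.score = 0.0
        self.streak_start = time.monotonic()

    def tic_observed(self) -> None:
        """Called when the external monitor (e.g. a parent) reports a tic."""
        self.streak_start = time.monotonic()   # reset the streak, keep the score

    def update(self) -> float:
        """Called periodically; awards more points the longer the streak."""
        streak_seconds = time.monotonic() - self.streak_start
        self.score += self.base_points * (1.0 + self.streak_bonus * streak_seconds)
        return self.score
```

    A noncontingent-reward condition, as in the masked comparison the abstract mentions, would instead award points on a fixed schedule regardless of reported tics.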

    Reducing regression test size by exclusion.

Operational software is constantly evolving. Regression testing is used to identify the unintended consequences of evolutionary changes. As most changes affect only a small proportion of the system, the challenge is to ensure that the regression test set is both safe (all relevant tests are used) and inclusive (only relevant tests are used). Previous approaches to reducing test sets struggle to find safe and inclusive tests by looking only at the changed code. We use decomposition program slicing to safely reduce the size of regression test sets by identifying those parts of a system that could not have been affected by a change; this information will then direct the selection of regression tests by eliminating tests that are not relevant to the change. The technique properly accounts for additions and deletions of code. We extend and use Rothermel and Harrold’s framework for measuring the safety of regression test sets and introduce new safety and precision measures that do not require a priori knowledge of the exact number of modification-revealing tests. We then analytically evaluate and compare our techniques for producing reduced regression test sets
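
    As a concrete illustration of selection by exclusion, the sketch below assumes the slicing step has already identified the components that cannot be affected by the change; any test whose coverage lies entirely inside that unaffected set is dropped. The data and names are hypothetical, and the sketch is not the authors' technique itself.

```python
def select_regression_tests(coverage, unaffected):
    """Exclude tests that only exercise components the change cannot affect.

    coverage   -- dict: test name -> set of components the test exercises
    unaffected -- set of components that (per the slice) cannot be affected
    """
    return {
        test
        for test, components in coverage.items()
        if components - unaffected   # keep tests touching possibly-affected code
    }

# Hypothetical example: only "report" and "format" are provably unaffected.
coverage = {
    "test_login":  {"auth", "session"},
    "test_report": {"report", "format"},
    "test_export": {"report", "io"},
}
unaffected = {"report", "format"}
print(select_regression_tests(coverage, unaffected))
# -> {'test_login', 'test_export'} (set order may vary); test_report is excluded
```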

    Letter to Sound Rules for Accented Lexicon Compression

This paper presents trainable methods for generating letter-to-sound rules from a given lexicon, for use in pronouncing out-of-vocabulary words and as a method for lexicon compression. As the relationship between a string of letters and a string of phonemes representing its pronunciation is not trivial for many languages, we discuss two alignment procedures, one fully automatic and one hand-seeded, which produce reasonable alignments of letters to phones. Top Down Induction Tree models are trained on the aligned entries. We show that combined phoneme/stress prediction is better than separate prediction processes, and better still when the model also includes the last phonemes transcribed and part-of-speech information. For the lexicons we have tested, our models have a word accuracy (including stress) of 78% for OALD, 62% for CMU, and 94% for BRULEX. The extremely high scores on the training sets allow substantial size reductions (more than 1/20). WWW site: http://tcts.fpms.ac.be/synthesis/mbrdico. Comment: 4 pages, 1 figure
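
    The core training step can be sketched with off-the-shelf tools. The fragment below assumes the alignment step has already paired each letter with one phoneme (a real lexicon would also carry stress marks and null phonemes) and trains a generic decision tree on a fixed letter-context window; it is a simplified stand-in for the paper's Top Down Induction Tree models, and the tiny lexicon is invented.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical aligned entries: one phoneme per letter, as alignment would give.
aligned = [
    ("cat",  ["k", "ae", "t"]),
    ("cab",  ["k", "ae", "b"]),
    ("city", ["s", "ih", "t", "iy"]),
]

def windows(word, width=2):
    """Fixed-width letter window centred on each letter ('#' pads the edges)."""
    padded = "#" * width + word + "#" * width
    return [list(padded[i:i + 2 * width + 1]) for i in range(len(word))]

X, y = [], []
for word, phones in aligned:
    for window, phone in zip(windows(word), phones):
        X.append(window)
        y.append(phone)

encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
tree = DecisionTreeClassifier().fit(encoder.fit_transform(X), y)

# Pronounce an out-of-vocabulary word letter by letter.
print(tree.predict(encoder.transform(windows("cit"))))
```

    The paper's joint phoneme/stress prediction, and its use of the previously transcribed phonemes and part-of-speech tags as extra features, would simply add columns to X; they are omitted here for brevity.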

    Was the Accounting Profession Really That Bad?

To gain insight into the extent of malpractice in the State of California prior to the passage of Sarbanes-Oxley, we examined the nature and magnitude of complaints filed with the California Board of Accountancy (CBA) against both licensed and unlicensed accountants during the fiscal years 2000, 2001, and 2002. The CBA currently licenses and regulates over 73,000 licensees, and 1,431 complaints were filed during the period reviewed. Disciplinary actions were taken against 283 different licensees over the three fiscal years reviewed. SEC issues were involved in 19 cases, theft or embezzlement in 46 cases, public accounting malpractice in 146 cases, improper retention of client records in 11 cases, cheating on the CPA examination in 9 cases, and miscellaneous other matters in 52 cases. Over half of the complaints involved public accounting issues. Audit-related complaints accounted for 48%, tax-related complaints for 36%, and compilations or reviews for 16% of the complaints. These statistics were in line with the experience of the AICPA Professional Liability program. Within these areas, the paper gives specifics on the most common problems identified in this work. While a number of interesting facts were discovered, one item of particular interest was the significant number of claims involving non-profit organizations. CBA administrators do not believe there is any greater tendency to report problems in non-profit engagements than in for-profit engagements, which appears to indicate that this is simply an area with a greater possibility of accounting malpractice

    How the credit channel works: differentiating the bank lending channel and the balance sheet channel

The credit channel of monetary policy transmission operates through changes in lending. To examine this channel, we explore how movements in the real federal funds rate affect bank lending. Using data on individual loans from the Survey of Terms of Bank Lending, we are able to differentiate two ways the credit channel can work: by affecting overall bank lending (the bank lending channel) and by affecting the allocation of loans (the balance sheet channel). We find evidence consistent with the operation of both credit channels. During periods of tight monetary policy, banks adjust their stock of loans by reducing the maturity of loan originations, and they reallocate their short-term loan supply from small firms to large firms. These results are stronger for large banks than for small banks.

    Search for Contact Interactions in the Dimuon Final State at ATLAS

The Standard Model has been successful in describing many fundamental aspects of particle physics. However, there are some remaining puzzles that are not explained within the context of its present framework. We discuss the possibility of discovering new physics in the ATLAS Detector via a four-fermion contact interaction, much in the same way Fermi first described Weak interactions. Using a simple ratio method on dimuon events, we can set a 95% C.L. lower limit on the effective scale Lambda = 7.5 TeV (8.7 TeV) for the constructive Left-left Isoscalar Model of quark compositeness with 100 pb^-1 (200 pb^-1) of data at sqrt{s} = 10 TeV. Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
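
    The statistical step can be illustrated, in heavily simplified form, with a toy counting experiment: a contact interaction adds high-mass dimuon events roughly in proportion to 1/Lambda^2, so scanning Lambda and excluding every value whose predicted yield is incompatible with the observed count gives a lower limit. The numbers, the single-bin treatment, and the plain Poisson test below are all invented placeholders; the abstract's ratio method (normalising the high-mass yield to a low-mass control region) and the quoted ATLAS limits are not reproduced here.

```python
from scipy.stats import poisson

# All inputs are hypothetical toy values, not ATLAS data or simulation.
n_obs     = 12      # observed high-mass dimuon events
b_sm      = 12.0    # expected Standard Model (Drell-Yan) events
s_at_1tev = 400.0   # extra events if Lambda = 1 TeV (signal scales as 1/Lambda^2)

def expected(lambda_tev):
    """Total expected high-mass events for a given contact-interaction scale."""
    return b_sm + s_at_1tev / lambda_tev ** 2

def excluded(lambda_tev, cl=0.95):
    """Exclude Lambda if seeing <= n_obs events is too unlikely under it."""
    return poisson.cdf(n_obs, expected(lambda_tev)) < 1.0 - cl

# The 95% C.L. lower limit is the largest excluded scale in the scan.
scan = [0.1 * k for k in range(10, 200)]          # 1.0 ... 19.9 TeV
print(f"toy 95% C.L. lower limit: {max(l for l in scan if excluded(l)):.1f} TeV")
```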